
    Numerical Simulation of the 9-10 June 1972 Black Hills Storm Using CSU RAMS

    Strong easterly flow of low-level moist air over the eastern slopes of the Black Hills on 9-10 June 1972 generated a storm system that produced a flash flood, devastating the area. Based on observations from this storm event, and from the similar 1976 Big Thompson storm, conceptual models have been developed to explain the unusually high precipitation efficiency. In this study, the Black Hills storm is simulated using the Colorado State University Regional Atmospheric Modeling System. Simulations with homogeneous and inhomogeneous initializations and with different grid structures are presented, and the conceptual models of storm structure proposed by previous studies are examined in light of these simulations. Both homogeneous and inhomogeneous initializations capture the intense nature of the storm, but the inhomogeneous simulation produced a precipitation pattern closer to the observed one. The simulations point to stationary tilted updrafts, with precipitation falling out to the rear, as the preferred storm structure. Experiments with different grid structures highlight the importance of placing the lateral boundaries far from the region of activity. Overall, the simulation's ability to capture the observed behavior of the storm system was enhanced by the use of inhomogeneous initialization.

    Bistable Gradient Networks II: Storage Capacity and Behaviour Near Saturation

    We examine numerically the storage capacity and the behaviour near saturation of an attractor neural network consisting of bistable elements with an adjustable coupling strength, the Bistable Gradient Network (BGN). For strong coupling, we find evidence of a first-order "memory blackout" phase transition, as in the Hopfield network. For weak coupling, on the other hand, there is no evidence of such a transition and memorized patterns can be stable even at high levels of loading. The enhanced storage capacity comes, however, at the cost of imperfect retrieval of the patterns from corrupted versions. Comment: 15 pages, 12 eps figures. Submitted to Phys. Rev. E. Sequel to cond-mat/020356.
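
    The model described above can be pictured as a Hopfield-style network in which each unit relaxes in a double-well potential while feeling a Hebbian coupling field. The sketch below is a minimal illustration of such bistable-gradient dynamics, assuming a quartic on-site potential and a coupling strength gamma; it is not the authors' exact formulation, and the pattern count, network size, and gamma are arbitrary.

        import numpy as np

        def hebbian_weights(patterns):
            # Standard Hebbian storage: J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal.
            n = patterns.shape[1]
            J = patterns.T @ patterns / n
            np.fill_diagonal(J, 0.0)
            return J

        def relax_bgn(x, J, gamma, dt=0.05, steps=2000):
            # Gradient descent on H(x) = sum_i (-x_i^2/2 + x_i^4/4) - (gamma/2) x^T J x:
            # each unit sits in a double well and is biased by the Hebbian field.
            for _ in range(steps):
                x = x + dt * (x - x**3 + gamma * J @ x)
            return x

        rng = np.random.default_rng(0)
        patterns = rng.choice([-1.0, 1.0], size=(5, 100))    # 5 random patterns, 100 units
        J = hebbian_weights(patterns)
        corrupted = patterns[0] * rng.choice([1, -1], p=[0.9, 0.1], size=100)
        recalled = relax_bgn(corrupted.astype(float), J, gamma=0.3)
        overlap = np.mean(np.sign(recalled) * patterns[0])   # retrieval quality in [-1, 1]
        print(f"overlap with stored pattern: {overlap:.2f}")

    With a small gamma the run illustrates the weak-coupling regime discussed in the abstract, where stored patterns remain attractors but retrieval from a corrupted version need not be perfect.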

    Massively parallel computing on an organic molecular layer

    Current computers operate at enormous speeds of ~10^13 bits/s, but their principle of sequential logic operation has remained unchanged since the 1950s. Though our brain is much slower on a per-neuron basis (~10^3 firings/s), it is capable of remarkable decision-making based on the collective operation of millions of neurons at a time in ever-evolving neural circuitry. Here we use molecular switches to build an assembly in which each molecule communicates, like a neuron, with many neighbors simultaneously. The assembly's ability to reconfigure itself spontaneously for a new problem allows us to realize conventional computing constructs such as logic gates and Voronoi decompositions, as well as to reproduce two natural phenomena: heat diffusion and the mutation of normal cells to cancer cells. This is a shift from the current static computing paradigm of serial bit-processing to a regime in which a large number of bits are processed in parallel in dynamically changing hardware. Comment: 25 pages, 6 figures.
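
    As a loose software analogue of the parallel, neighbour-coupled processing described above (not the molecular hardware itself), the toy cellular automaton below updates every cell from its neighbours simultaneously and produces a Voronoi-like partition of a grid by parallel label spreading; the grid size and seed positions are arbitrary choices for illustration.

        import numpy as np

        def voronoi_ca(shape, seeds, steps=200):
            # Every cell holds a label (0 = unassigned); at each step all cells update
            # in parallel, adopting a label present in their 4-neighbourhood
            # (ties between competing labels resolve to the larger label).
            labels = np.zeros(shape, dtype=int)
            for k, (r, c) in enumerate(seeds, start=1):
                labels[r, c] = k
            for _ in range(steps):
                padded = np.pad(labels, 1)
                neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                                  padded[1:-1, :-2], padded[1:-1, 2:]])
                spread = neigh.max(axis=0)           # any labelled neighbour wins
                labels = np.where(labels == 0, spread, labels)
            return labels

        grid = voronoi_ca((40, 40), seeds=[(5, 5), (30, 10), (20, 35)])
        print(np.unique(grid, return_counts=True))   # cells claimed by each seed

    Because every cell updates at once from purely local information, the work per step is independent of the number of cells doing it, which is the point of the parallel paradigm the paper argues for.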

    On the validity of entropy production principles for linear electrical circuits

    We discuss the validity of close-to-equilibrium entropy production principles in the context of linear electrical circuits. Both the minimum and the maximum entropy production principle are understood within dynamical fluctuation theory. The starting point is a set of Langevin equations obtained by combining Kirchhoff's laws with Johnson-Nyquist noise at each dissipative element in the circuit. The main observation is that the fluctuation functional for time averages, which can be read off from the path-space action, is to first order around equilibrium given by an entropy production rate. This allows one to understand, beyond the schemes of irreversible thermodynamics, (1) the validity of the least-dissipation, minimum entropy production, and maximum entropy production principles close to equilibrium; (2) the role of the observables' parity under time-reversal and, in particular, the origin of Landauer's counterexample (1975) from the fact that the fluctuating observable there is odd under time-reversal; (3) the critical remark of Jaynes (1980) concerning the apparent inappropriateness of entropy production principles in temperature-inhomogeneous circuits. Comment: 19 pages, 1 figure.
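
    As a concrete example of the starting point described above, consider a single loop with one resistor R and one capacitor C at temperature T (the notation is chosen here for illustration and is not taken from the paper). Kirchhoff's voltage law plus Johnson-Nyquist noise at the resistor gives the Langevin equation

        R \, \dot{q}(t) = -\frac{q(t)}{C} + \sqrt{2 k_B T R} \, \xi(t),
        \qquad \langle \xi(t)\, \xi(t') \rangle = \delta(t - t'),

    where q is the capacitor charge and \xi is unit white noise. The fluctuation functional for time averages over such trajectories is the object that, to first order around equilibrium, reduces to an entropy production rate.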

    Leaderless deterministic chemical reaction networks

    This paper answers an open question of Chen, Doty, and Soloveichik [1], who showed that a function f: N^k --> N^l is deterministically computable by a stochastic chemical reaction network (CRN) if and only if the graph of f is a semilinear subset of N^{k+l}. That construction crucially used "leaders": the ability to start in an initial configuration with constant but non-zero counts of species other than the k species X_1,...,X_k representing the input to the function f. The authors asked whether deterministic CRNs without a leader retain the same power. We answer this question affirmatively, showing that every semilinear function is deterministically computable by a CRN whose initial configuration contains only the input species X_1,...,X_k and zero counts of every other species. We show that this CRN completes in expected time O(n), where n is the total number of input molecules. This time bound is slower than the O(log^5 n) achieved in [1], but faster than the O(n log n) achieved by the direct construction of [1] (Theorem 4.1 in the latest online version of [1]), since the fast construction of that paper (Theorem 4.4) relied heavily on the use of a fast, error-prone CRN that computes arbitrary computable functions, and which crucially uses a leader. Comment: arXiv admin note: substantial text overlap with arXiv:1204.417.
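
    As a toy illustration of leaderless deterministic CRN computation (not the construction of this paper), the two reactions X1 -> Y and X2 -> Y compute f(x1, x2) = x1 + x2 from an initial configuration containing only the input species. The simulator below, a generic sketch with arbitrarily chosen species names, fires applicable reactions in a random order and shows that the output count is the same on every run.

        import random

        def simulate_crn(reactions, counts, seed=None):
            # reactions: list of (reactants, products), each a dict species -> stoichiometry.
            # Fires any applicable reaction until none applies; for this network the
            # order of firings does not affect the final counts, which is what
            # "deterministic computation" means for a CRN.
            rng = random.Random(seed)
            while True:
                applicable = [r for r in reactions
                              if all(counts.get(s, 0) >= n for s, n in r[0].items())]
                if not applicable:
                    return counts
                reactants, products = rng.choice(applicable)
                for s, n in reactants.items():
                    counts[s] -= n
                for s, n in products.items():
                    counts[s] = counts.get(s, 0) + n

        # Leaderless addition: X1 -> Y, X2 -> Y; the initial configuration holds inputs only.
        reactions = [({"X1": 1}, {"Y": 1}), ({"X2": 1}, {"Y": 1})]
        final = simulate_crn(reactions, {"X1": 7, "X2": 5}, seed=42)
        print(final["Y"])   # always 12, independent of the random order of firings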

    Programmability of Chemical Reaction Networks

    Motivated by the intriguing complexity of biochemical circuitry within individual cells, we study Stochastic Chemical Reaction Networks (SCRNs), a formal model that considers a set of chemical reactions acting on a finite number of molecules in a well-stirred solution according to standard chemical kinetics equations. SCRNs have been widely used for describing naturally occurring (bio)chemical systems, and with the advent of synthetic biology they have become a promising language for the design of artificial biochemical circuits. Our interest here is the computational power of SCRNs and how they relate to more conventional models of computation. We survey known connections and give new connections between SCRNs and Boolean Logic Circuits, Vector Addition Systems, Petri Nets, Gate Implementability, Primitive Recursive Functions, Register Machines, Fractran, and Turing Machines. A theme of these investigations is the thin line between decidable and undecidable questions about SCRN behavior.
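
    One of the surveyed connections, between SCRNs and Vector Addition Systems / Petri nets, can be made concrete in a few lines: a reaction is a pair of non-negative integer vectors (reactants, products), and firing it adds products minus reactants to the species-count vector whenever the counts cover the reactants. The example network below is arbitrary and chosen only to illustrate the correspondence; it is not taken from the paper.

        import numpy as np

        # Species order: (A, B, C).  A reaction is (reactants, products); its net effect
        # is products - reactants, exactly a Vector Addition System / Petri net transition.
        reactions = [
            (np.array([1, 1, 0]), np.array([0, 0, 1])),   # A + B -> C
            (np.array([0, 0, 1]), np.array([0, 2, 0])),   # C -> 2B
        ]

        def fire(state, reactants, products):
            # Petri-net enabling condition: the state must cover the reactants.
            if np.all(state >= reactants):
                return state - reactants + products
            return state                                   # not enabled, state unchanged

        state = np.array([3, 1, 0])                        # counts of A, B, C
        for reactants, products in reactions:
            state = fire(state, reactants, products)
        print(state)                                       # [2 2 0] after both reactions fire

    Reachability and coverability questions about such vectors are exactly the decidable/undecidable boundary the abstract alludes to.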

    Evolution of associative learning in chemical networks

    Organisms that can learn about their environment and modify their behaviour appropriately during their lifetime are more likely to survive and reproduce than organisms that cannot. While associative learning, the ability to detect correlated features of the environment, has been studied extensively in nervous systems, where the underlying mechanisms are reasonably well understood, mechanisms within single cells that could allow associative learning have received little attention. Here, using in silico evolution of chemical networks, we show that there exists a diversity of remarkably simple and plausible chemical solutions to the associative learning problem, the simplest of which uses only one core chemical reaction. We then asked to what extent a linear combination of chemical concentrations in the network could approximate the ideal Bayesian posterior of an environment given the stimulus history so far. This Bayesian analysis revealed the 'memory traces' of the chemical network. The implication of this paper is that there is little reason to believe that a lack of suitable phenotypic variation would prevent associative learning from evolving in cell signalling, metabolic, gene regulatory, or a mixture of these networks in cells.
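
    A minimal sketch of how a single "weight" species could support associative learning (an illustrative toy with made-up rate constants, not one of the evolved networks from the paper): a species W is produced only when two stimuli co-occur, decays slowly, and then lets the first stimulus alone trigger a response R.

        # Toy associative-learning motif (illustrative only):
        #   S1 + S2 -> S1 + S2 + W   (co-occurrence writes the association, rate k_learn)
        #   W       -> (decay)       (forgetting, rate k_decay)
        #   S1 + W  -> S1 + W + R    (learned response to S1 alone, rate k_resp)
        #   R       -> (decay)
        def step(s1, s2, w, r, dt=0.01, k_learn=1.0, k_decay=0.05, k_resp=1.0, k_rdec=0.5):
            dw = k_learn * s1 * s2 - k_decay * w
            dr = k_resp * s1 * w - k_rdec * r
            return w + dt * dw, r + dt * dr

        w = r = 0.0
        # Training phase: S1 and S2 presented together.
        for _ in range(2000):
            w, r = step(1.0, 1.0, w, r)
        # Test phase: S1 alone now evokes a response because W persists.
        for _ in range(500):
            w, r = step(1.0, 0.0, w, r)
        print(f"association strength W = {w:.2f}, response R = {r:.2f}")

    The slowly decaying concentration of W plays the role of the "memory trace" the abstract refers to, and its level is the kind of quantity a linear readout of concentrations could use to approximate a posterior over the environment.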

    Temporal patterns in artificial reaction networks.

    The Artificial Reaction Network (ARN) is a bio-inspired connectionist paradigm based on the emerging field of Cellular Intelligence. It has properties in common with both AI and Systems Biology techniques, including Artificial Neural Networks, Petri Nets, and S-Systems. This paper discusses the temporal aspects of the ARN model, using robotic gaits as an example, and compares it with the properties of Artificial Neural Networks. The comparison shows that the ARN-based network has similar functionality.
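
    As a generic illustration of how a small reaction-style network can generate the rhythmic temporal patterns needed for gaits (a repressilator-like three-node ring, chosen here for illustration and not the ARN model itself):

        def ring_oscillator(steps=6000, dt=0.01, alpha=10.0, n=3, decay=1.0):
            # Three nodes in a ring, each repressing the next; with sufficiently strong
            # repression the fixed point is unstable and the concentrations oscillate
            # with three phase-shifted waveforms.
            x = [1.0, 0.5, 0.1]
            history = []
            for _ in range(steps):
                dx = [alpha / (1.0 + x[(i - 1) % 3] ** n) - decay * x[i] for i in range(3)]
                x = [x[i] + dt * dx[i] for i in range(3)]
                history.append(list(x))
            return history

        traj = ring_oscillator()
        # Threshold each node to get a three-phase on/off pattern, e.g. to drive three legs.
        gait = [[1 if v > 2.0 else 0 for v in state] for state in traj[::200]]
        for phase in gait[:10]:
            print(phase)

    The phase-shifted on/off channels are the kind of temporal pattern a gait controller needs; the paper's point is that a reaction-network formalism can produce such patterns in a way comparable to an Artificial Neural Network.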

    Mixed Climatology, Non-synoptic Phenomena and Downburst Wind Loading of Structures

    Modern wind engineering was born in 1961, when Davenport published a paper in which meteorology, micrometeorology, climatology, bluff-body aerodynamics and structural dynamics were embedded within a homogeneous framework for the wind loading of structures, known today as the “Davenport chain”. Idealizing the wind as a synoptic extra-tropical cyclone, this model was so simple and elegant as to become a sort of axiom. Between 1976 and 1977, Gomes and Vickery separated thunderstorm from non-thunderstorm winds, determined their disjoint extreme distributions and derived a mixed model, later extended to other aeolian phenomena; this study, a milestone in mixed climatology, proved the impossibility of labelling a heterogeneous range of events with the generic term “wind”. This paper provides an overview of this matter, with particular regard to the studies conducted at the University of Genova on thunderstorm downbursts.
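
    The core of such a mixed model can be stated compactly: if the thunderstorm and non-thunderstorm annual maxima are treated as independent (the notation below is generic, not taken from this paper), the distribution of the overall annual maximum wind speed is the product of the two extreme-value distributions,

        F_{\mathrm{mix}}(v) \;=\; \Pr\big(\max(V_T, V_{NT}) \le v\big) \;=\; F_T(v)\, F_{NT}(v),

    so a design speed for a given return period must be read from F_mix rather than from either wind population alone.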

    Optimization of Enzymatic Biochemical Logic for Noise Reduction and Scalability: How Many Biocomputing Gates Can Be Interconnected in a Circuit?

    We report an experimental evaluation of the "input-output surface" for a biochemical AND gate. The obtained data are modeled within the rate-equation approach, with the aim of mapping out the gate function and casting it in the language of logic variables appropriate for the analysis of Boolean logic scalability. To minimize "analog" noise amplification under gate concatenation, we consider a theoretical approach for determining an optimal set of process parameters. We establish that under optimized conditions, the presently studied biochemical gates can be concatenated for up to order 10 processing steps. Beyond that, new paradigms for avoiding noise build-up will have to be developed. We offer a general discussion of these ideas and of possible future challenges for both experimental and theoretical research on scalable biochemical computing.
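
    A rough way to see where a figure of "order 10" steps can come from (a back-of-the-envelope sketch with made-up numbers, not the paper's rate-equation model): if the gate's normalized response has a slope g slightly above 1 near a logic point, small relative input noise grows roughly like g^n after n concatenated gates.

        # Hedged sketch: per-gate noise amplification g compounds geometrically, so the
        # relative noise after n gates is roughly sigma_n = g**n * sigma_0.  With an
        # illustrative g = 1.25, a 1% input noise, and a 10% tolerance, the budget
        # runs out after about ten gates.
        sigma0, g, tolerance = 0.01, 1.25, 0.10
        n = 0
        sigma = sigma0
        while sigma * g <= tolerance:
            sigma *= g
            n += 1
        print(f"gates before noise exceeds tolerance: {n}")   # about 10 for these numbers

    Optimizing the gate parameters amounts to pushing g as close to 1 as possible near the logic points, which is what extends the usable circuit depth.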